Limitation of Learning Rankings from Partial Information
Authors
Abstract
1.1. Notations. Let n be the number of elements and Sn the set of all n! permutations, or rankings, of these n elements. Our interest is in learning non-negative valued functions f defined on Sn, i.e. f : Sn → R+, where R+ = {x ∈ R : x ≥ 0}. The support of f is defined as supp(f) = {σ ∈ Sn : f(σ) ≠ 0}. The cardinality of the support, |supp(f)|, is called the sparsity of f and is denoted by K; we also call it the ℓ0 norm of f, denoted |f|0. In this paper, our goal is to learn f from partial information. In order to formally define the partial information we consider, we need some notation. To this end, consider a partition of n, i.e. an ordered tuple λ = (λ1, λ2, . . . , λr) such that λ1 ≥ λ2 ≥ . . . ≥ λr ≥ 1 and n = λ1 + λ2 + . . . + λr. For example, λ = (n − 1, 1) is a partition of n. Now consider a partition of the n elements {1, . . . , n} as per λ, i.e. divide the n elements into r bins, with the ith bin containing λi elements. It is easy to see that the n elements can be divided as per the λ partition in Dλ distinct ways.
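Since the n elements are labeled and the bins are ordered by λ, the count Dλ is the standard multinomial coefficient n!/(λ1! · · · λr!). A minimal sketch of these quantities (function names and the dict representation of f are illustrative, not from the paper):

```python
from math import factorial

def num_partitions(lam):
    """D_lambda: ways to split n = sum(lam) labeled elements into
    ordered bins of sizes lam[0], ..., lam[r-1] (multinomial)."""
    d = factorial(sum(lam))
    for size in lam:
        d //= factorial(size)
    return d

def sparsity(f):
    """K = |supp(f)|, with f given as a dict {permutation: value}."""
    return sum(1 for v in f.values() if v != 0)

# Example: n = 4 split as lambda = (3, 1) -> 4!/(3! * 1!) = 4 ways
print(num_partitions((3, 1)))  # -> 4
```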
Similar resources
Riffled Independence for Efficient Inference with Partial Rankings
Distributions over rankings are used to model data in a multitude of real world settings such as preference analysis and political elections. Modeling such distributions presents several computational challenges, however, due to the factorial size of the set of rankings over an item set. Some of these challenges are quite familiar to the artificial intelligence community, such as how to compact...
Clustering and Prediction of Rankings Within a Kemeny Distance Framework
Rankings and partial rankings are ubiquitous in data analysis, yet there is relatively little work in the classification community that uses the typical properties of rankings. We review the broader literature that we are aware of, and identify a common building block for both prediction of rankings and clustering of rankings, which is also valid for partial rankings. This building block is the...
Pairwise Preference Learning and Ranking
We consider supervised learning of a ranking function, which is a mapping from instances to total orders over a set of labels (options). The training information consists of examples with partial (and possibly inconsistent) information about their associated rankings. From these, we induce a ranking function by reducing the original problem to a number of binary classification problems, one for...
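The reduction described in this abstract can be sketched as follows: each known preference a ≻ b in a training example becomes one binary instance for the classification problem associated with the label pair (a, b). All names below are illustrative, not the authors' API:

```python
def pairwise_instances(examples):
    """Reduce ranking data with partial preference information to
    binary classification problems, one per label pair (sketch).

    examples: list of (features, prefs), where prefs is a set of
    (a, b) pairs meaning label a is preferred to label b.
    Returns {(a, b): [(features, y), ...]} with a < b and y = 1
    iff a is preferred to b."""
    problems = {}
    for x, prefs in examples:
        for a, b in prefs:
            # Canonicalize the pair so each unordered pair maps to
            # exactly one binary problem.
            key, y = ((a, b), 1) if a < b else ((b, a), 0)
            problems.setdefault(key, []).append((x, y))
    return problems

examples = [
    ([1.0, 0.0], {("A", "B"), ("B", "C")}),  # A > B, B > C
    ([0.0, 1.0], {("C", "A")}),              # C > A
]
probs = pairwise_instances(examples)
```

A per-pair binary classifier trained on each `problems[(a, b)]` can then be queried at prediction time and its votes aggregated into a total order, which is the usual way such reductions are used.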
The Kendall and Mallows Kernels for Permutations
We show that the widely used Kendall tau correlation coefficient, and the related Mallows kernel, are positive definite kernels for permutations. They offer computationally attractive alternatives to more complex kernels on the symmetric group to learn from rankings, or learn to rank. We show how to extend these kernels to partial rankings, multivariate rankings and uncertain rankings. Examples...
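For full rankings, both kernels mentioned above reduce to counting discordant pairs. A pure-Python sketch (the Mallows parameterization shown is one common form, not necessarily the exact one used in the cited paper):

```python
from itertools import combinations
from math import comb, exp

def discordant_pairs(sigma, pi):
    """Number of item pairs ranked in opposite order by the two
    rankings; sigma[i] and pi[i] are the ranks of item i."""
    n = len(sigma)
    return sum(1 for i, j in combinations(range(n), 2)
               if (sigma[i] - sigma[j]) * (pi[i] - pi[j]) < 0)

def kendall_kernel(sigma, pi):
    """Kendall tau correlation: (concordant - discordant) / C(n, 2),
    assuming full rankings with no ties."""
    return 1 - 2 * discordant_pairs(sigma, pi) / comb(len(sigma), 2)

def mallows_kernel(sigma, pi, lam=1.0):
    """Mallows kernel exp(-lam * n_disc) for bandwidth lam > 0."""
    return exp(-lam * discordant_pairs(sigma, pi))

print(kendall_kernel((1, 2, 3), (3, 2, 1)))  # -> -1.0 (reversed)
```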
Österreichisches Forschungsinstitut für Artificial Intelligence / Austrian Research Institute for Artificial Intelligence, TR-2003-26
Publication date: 2010